Results 1 - 20 of 68
1.
Fusion: Practice and Applications ; 11(1):26-36, 2023.
Article in English | Scopus | ID: covidwho-20235371

ABSTRACT

The term "COVID-19" has been among the most heavily trending Google searches since the disease first appeared in November 2019. Advances in mobile technology and sensors have made healthcare systems based on the Internet of Things (IoT) feasible; unlike traditional reactive healthcare systems, these new systems can be proactive and preventive. This paper proposes an IoT-based framework for real-time detection of suspected COVID-19 cases. In the early phases of prediction, the framework evaluates the likely presence of the virus from health variables collected in real time from sensors and other IoT devices, and gathers COVID-19 symptom data to better understand the behavior of the virus. Four machine learning models (Random Forest, Decision Tree, K-Nearest Neighbour, and Artificial Neural Network) are applied to these data to obtain high diagnostic accuracy. Because the collected clinical data are scarce and unbalanced, the dataset was augmented using a Generative Adversarial Network (GAN). All four algorithms achieved high accuracy (ACC): Random Forest (99%), Decision Tree (99%), K-Nearest Neighbour (98%), and Artificial Neural Network (99%). These results demonstrate the ability of GANs to generate relevant data that can help manage COVID-19 efficiently and reduce the risk of its spread through accurate diagnosis of patients and notification of health authorities about suspected cases. © 2023, American Scientific Publishing Group (ASPG). All rights reserved.
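The workflow this abstract describes (standard classifiers trained on a GAN-augmented symptom table) can be sketched as follows; the random stand-in data, the feature count, and the use of scikit-learn are assumptions for illustration, not the authors' code.

```python
# Minimal sketch: train the four classifiers named in the abstract on a
# symptom table whose scarce positive class has been topped up with
# GAN-generated rows. All data here is random stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X_real = rng.normal(size=(500, 8))                          # 8 symptom/vital features
y_real = (X_real[:, 0] + X_real[:, 1] > 1.5).astype(int)    # scarce positive class

# Stand-in for minority-class rows produced by a trained GAN.
X_syn = rng.normal(loc=1.0, size=(300, 8))
y_syn = np.ones(300, dtype=int)

X = np.vstack([X_real, X_syn])
y = np.concatenate([y_real, y_syn])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "K-Nearest Neighbour": KNeighborsClassifier(),
    "Artificial Neural Network": MLPClassifier(max_iter=1000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```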

2.
Signal Image Video Process ; : 1-9, 2023 May 26.
Article in English | MEDLINE | ID: covidwho-20231275

ABSTRACT

The past years of COVID-19 have prompted researchers to carry out benchmark work on face mask detection. However, the existing work does not address the problem of reconstructing the facial region hidden behind the mask so that the completed face can be used for face recognition. To address this problem, we propose a spatial attention module-based conditional generative adversarial network that generates plausible mask-free face images by removing the face mask from the face region. The proposed method uses a self-created dataset of faces wearing three types of face masks for training and testing. Compared with the vanilla C-GAN method, the proposed method achieves an SSIM value of 0.91231 (3.89% higher) and a PSNR value of 30.9879 (3.17% higher).
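The two image-quality metrics reported above (SSIM and PSNR) are commonly computed with scikit-image; a minimal sketch follows, in which the random arrays stand in for a ground-truth unmasked face and a generator output and are not the authors' data.

```python
# Sketch: computing SSIM and PSNR between a reference image and a
# reconstructed image using scikit-image. Random arrays stand in for
# real image pairs; values in [0, 1].
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
reference = rng.random((128, 128, 3))
generated = np.clip(reference + 0.02 * rng.normal(size=reference.shape), 0, 1)

ssim = structural_similarity(reference, generated, channel_axis=-1, data_range=1.0)
psnr = peak_signal_noise_ratio(reference, generated, data_range=1.0)
print(f"SSIM={ssim:.5f}  PSNR={psnr:.4f} dB")
```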

3.
Comput Biol Med ; 163: 107113, 2023 Jun 02.
Article in English | MEDLINE | ID: covidwho-20230910

ABSTRACT

The outbreak of coronavirus disease (COVID-19) in 2019 has highlighted the need for automatic diagnosis of the disease, which can develop rapidly into a severe condition. Nevertheless, distinguishing between COVID-19 pneumonia and community-acquired pneumonia (CAP) on computed tomography scans can be challenging because of their similar characteristics. Existing methods often perform poorly on the three-class classification task of healthy, CAP, and COVID-19 pneumonia, and handle the heterogeneity of multi-center data poorly. To address these challenges, we design a COVID-19 classification model using a global information optimized network (GIONet) and a cross-center domain adversarial learning strategy. Our approach proposes a 3D convolutional neural network with a graph-enhanced aggregation unit and a multi-scale self-attention fusion unit to improve global feature extraction. We also verified that domain adversarial training can effectively reduce the feature distance between different centers, addressing the heterogeneity of multi-center data, and used specialized generative adversarial networks to balance the data distribution and improve diagnostic performance. Our experiments demonstrate satisfactory diagnostic results, with a mixed-dataset accuracy of 99.17% and cross-center task accuracies of 86.73% and 89.61%.
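The cross-center domain adversarial training mentioned in the abstract is commonly built on a gradient reversal layer; the PyTorch sketch below shows that generic mechanism only and is not the GIONet implementation (the feature dimension, the number of centers, and the loss weighting are assumptions).

```python
# Generic gradient-reversal layer, the usual building block for domain
# adversarial training: features are pushed to be indistinguishable across
# acquisition centers. Illustration of the technique, not the GIONet code.
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the backbone.
        return -ctx.lam * grad_output, None

class DomainDiscriminator(nn.Module):
    def __init__(self, feat_dim=256, n_domains=2, lam=1.0):
        super().__init__()
        self.lam = lam
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_domains))

    def forward(self, features):
        reversed_feats = GradReverse.apply(features, self.lam)
        return self.net(reversed_feats)

# Usage: domain_logits = DomainDiscriminator()(backbone_features), then a
# cross-entropy loss on the center label is added to the diagnostic task loss.
```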

4.
IEEE Transactions on Computational Social Systems ; 2023.
Article in English | Web of Science | ID: covidwho-2328331

ABSTRACT

Social media platforms became a vital source of information during the outbreak of the COVID-19 pandemic. Fake information and news spread through social media have become increasingly prevalent and a powerful vehicle for information proliferation. Detecting fake news is crucial for the betterment of society. Existing fake news detection models focus on increasing performance, which leads to overfitting and poor generalizability. Hence, these models require retraining for different datasets of the same domain whose distributions vary significantly. In our work, we address this overfitting issue by designing a robust distribution generalization transformer-based generative adversarial network (RDGT-GAN) architecture, which generalizes to COVID-19 fake news datasets with different distributions without retraining. Our experimental findings show that the proposed model outperforms the current state-of-the-art (SOTA) models in terms of performance.

5.
3rd International Conference on Artificial Intelligence and Computer Engineering, ICAICE 2022 ; 12610, 2023.
Article in English | Scopus | ID: covidwho-2323482

ABSTRACT

The global pandemic caused by the spread of COVID-19 has posed challenges in a new dimension for facial recognition, as people have started to wear masks. Under such conditions, the authors consider using machine learning for image inpainting to tackle the problem by completing the part of the face originally covered by the mask. In particular, the autoencoder has great potential for retaining the important, general features of an image, complemented by the generative power of the generative adversarial network (GAN). The authors implement a combination of the two models, context encoders, explain how it combines the strengths of both, and train the model on 50,000 images of influencers' faces, yielding a solid result that still leaves room for improvement. Furthermore, the authors discuss some shortcomings of the model and possible improvements, as well as areas of future investigation from an application perspective and directions to further enhance and refine the model. © 2023 SPIE.

6.
57th Annual Conference on Information Sciences and Systems, CISS 2023 ; 2023.
Article in English | Scopus | ID: covidwho-2320107

ABSTRACT

Fitness activities are beneficial to one's health and well-being. During the COVID-19 pandemic, demand for virtual trainers increased. Current systems can either classify different exercises or provide feedback on a specific exercise. We propose a system that can simultaneously recognize a pose and provide real-time corrective feedback on the performed exercise, with the least latency between recognition and correction. In all computer vision techniques implemented so far, occlusion and a lack of labeled data are the most significant obstacles to correctly detecting exercises and providing helpful feedback. Vector geometry is employed to calculate the angles between key points detected on the body, which are used to provide the user with corrective feedback and to count the repetitions of each exercise. Three architectures (GAN, Conv-LSTM, and LSTM-RNN) are evaluated for exercise recognition. A custom dataset of jumping jacks, squats, and lunges is used to train the models. The GAN achieved 92% testing accuracy but struggled in real-time performance. The LSTM-RNN architecture yielded 95% testing accuracy, and the ConvLSTM obtained an accuracy of 97% on real-time sequences. © 2023 IEEE.
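The vector-geometry step described above (joint angles computed from detected key points, used for feedback and repetition counting) can be sketched as follows; the key-point names and the squat-angle threshold are illustrative assumptions, not values from the paper.

```python
# Sketch: the angle at a joint is computed from three detected keypoints
# (e.g. hip-knee-ankle for a squat) via the arccosine of the dot product.
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at keypoint b formed by segments b->a and b->c."""
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

hip, knee, ankle = (0.0, 1.0), (0.1, 0.5), (0.1, 0.0)
angle = joint_angle(hip, knee, ankle)
print(f"knee angle: {angle:.1f} deg")   # e.g. count a squat repetition when the
                                        # angle dips below ~90 deg and recovers
```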

7.
Eur J Radiol ; 164: 110858, 2023 Jul.
Article in English | MEDLINE | ID: covidwho-2320699

ABSTRACT

PURPOSE: To develop a generative adversarial network (GAN) that automatically quantifies COVID-19 pneumonia on chest radiographs. MATERIALS AND METHODS: This retrospective study included 50,000 consecutive non-COVID-19 chest CT scans acquired in 2015-2017 for training. Anteroposterior virtual chest, lung, and pneumonia radiographs were generated from the whole, segmented lung, and pneumonia pixels of each CT scan. Two GANs were trained sequentially: one to generate lung images from radiographs and one to generate pneumonia images from lung images. GAN-driven pneumonia extent (pneumonia area/lung area) was expressed from 0% to 100%. We examined the correlation of GAN-driven pneumonia extent with the semi-quantitative Brixia X-ray severity score (one dataset, n = 4707) and quantitative CT-driven pneumonia extent (four datasets, n = 54-375), and analyzed the measurement difference between the GAN and CT extents. Three datasets (n = 243-1481), in which unfavorable outcomes (respiratory failure, intensive care unit admission, and death) occurred in 10%, 38%, and 78% of patients, respectively, were used to examine the predictive power of GAN-driven pneumonia extent. RESULTS: GAN-driven radiographic pneumonia extent was correlated with the severity score (0.611) and the CT-driven extent (0.640). The 95% limits of agreement between GAN- and CT-driven extents were -27.1% to 17.4%. GAN-driven pneumonia extent provided odds ratios of 1.05-1.18 per percent for unfavorable outcomes in the three datasets, with areas under the receiver operating characteristic curve (AUCs) of 0.614-0.842. When combined with demographic information only, and with both demographic and laboratory information, the prediction models yielded AUCs of 0.643-0.841 and 0.688-0.877, respectively. CONCLUSION: The generative adversarial network automatically quantified COVID-19 pneumonia on chest radiographs and identified patients with unfavorable outcomes.


Subject(s)
COVID-19 , Pneumonia , Humans , COVID-19/diagnostic imaging , Retrospective Studies , SARS-CoV-2 , Pneumonia/diagnostic imaging , Lung/diagnostic imaging
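The extent measure defined in the abstract above (pneumonia area divided by lung area, expressed as a percentage) reduces to a simple mask ratio; in the sketch below, the random binary masks stand in for thresholded GAN outputs and are not the authors' pipeline.

```python
# Sketch of the extent definition: pneumonia extent = pneumonia pixels / lung
# pixels, as a percentage. Random masks stand in for GAN-generated lung and
# pneumonia images after thresholding.
import numpy as np

rng = np.random.default_rng(0)
lung_mask = rng.random((512, 512)) > 0.4                   # pixels classified as lung
pneumonia_mask = (rng.random((512, 512)) > 0.85) & lung_mask

extent = 100.0 * pneumonia_mask.sum() / max(lung_mask.sum(), 1)
print(f"GAN-driven pneumonia extent: {extent:.1f}%")
```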
8.
International Journal of Materials Research ; 0(0), 2023.
Article in English | Web of Science | ID: covidwho-2309390

ABSTRACT

This work presents a sensitivity assessment of a gallium nitride (GaN) material-based silicon-on-insulator fin field-effect transistor with dielectric modulation in the nanocavity gap for label-free biosensing applications. Owing to the GaN material, significant deflection is observed in the electrical characteristics, such as drain current, transconductance, surface potential, energy band profile, electric field, sub-threshold slope, and threshold voltage, in the presence of biomolecules. Further, the device sensitivity is evaluated to determine the effectiveness of the proposed biosensor and its capability to detect biomolecules with high precision and accuracy. The highest sensitivity is observed for gelatin (k = 12), with improvements in on-current, threshold voltage, and switching ratio of 104.88%, 82.12%, and 119.73%, respectively. This work is performed using the powerful three-dimensional (3D) Sentaurus Technology computer-aided design tool with a well-calibrated structure. The results pave the way for the GaN-SOI-FinFET to be a viable candidate for label-free dielectric-modulated biosensor applications.

9.
Vis Comput ; : 1-39, 2022 Jan 08.
Article in English | MEDLINE | ID: covidwho-2289291

ABSTRACT

Chest radiography (X-ray) is the most common diagnostic method for pulmonary disorders. A trained radiologist is required to interpret the radiographs, but even experienced radiologists can sometimes misinterpret the findings. This creates a need for computer-aided detection and diagnosis. For decades, researchers automatically detected pulmonary disorders using traditional computer vision (CV) methods. Now, the availability of large annotated datasets and computing hardware has allowed deep learning to dominate the area; it is the modus operandi for feature extraction, segmentation, detection, and classification tasks in medical imaging analysis. This paper focuses on research that uses chest X-rays for lung segmentation and the detection/classification of pulmonary disorders on publicly available datasets. Studies that apply Generative Adversarial Network (GAN) models for segmentation and classification on chest X-rays are also included, as GANs have gained the interest of the CV community for their ability to mitigate medical data scarcity. We also include research conducted before the popularity of deep learning models to give a clear picture of the field. Many surveys have been published, but none of them is dedicated to chest X-rays. This study will help readers learn about the existing techniques and approaches and their significance.

10.
Wearable and Neuronic Antennas for Medical and Wireless Applications ; : 37-56, 2022.
Article in English | Scopus | ID: covidwho-2293181

ABSTRACT

The arrival of COVID-19 threw the very existence of the human race into turmoil. In countries like India, where the majority of the population is concentrated in rural areas and is subject to affordability and infrastructural constraints, people often cannot afford sophisticated COVID-19 tests. X-ray, however, is widely available across both the rural and urban belts of the country and comes at an affordable cost, even free at government hospitals. In the present research paper, we put forward a fusion-based DCGAN and CNN neural network architecture that generates synthetic COVID-19-infected lung X-ray images from the fed data. We consider two main output classes, namely malignant and benign. The novelty of this paper is that, from the original X-ray image, our model instantaneously generates a "predicted" image using the DCGAN structure to capture the process of mutation. The model also predicts the class of the newly generated "predicted" image, i.e., whether it is COVID-19 positive or negative, through the proposed CNN architecture. However, the paper notes that the success of deploying our model depends on the availability of a 5G network, as the "predicted" X-ray image, along with the patient's original X-ray image, needs to be transmitted to a central server where it is analyzed to determine the further course of treatment. We have attempted to achieve state-of-the-art accuracy with our CNN model. © 2022 Scrivener Publishing LLC.

11.
Mathematics ; 11(8):1926, 2023.
Article in English | ProQuest Central | ID: covidwho-2300709

ABSTRACT

Facial-image-based age estimation is being used increasingly in various fields. Examples include statistical marketing analysis based on age-specific product preferences, medical applications such as beauty products and telemedicine, and age-based suspect tracking in intelligent surveillance camera systems. Masks are increasingly worn for hygiene, personal privacy, and fashion. In particular, the acquisition of mask-occluded facial images has become more frequent due to the COVID-19 pandemic. These images lose important features and information for age estimation, which reduces its accuracy. Existing de-occlusion studies have investigated masquerade masks that do not completely occlude the eyes, nose, and mouth; however, no studies have investigated the de-occlusion of masks that completely occlude the nose and mouth and its use for age estimation, which is the goal of this study. Accordingly, this study proposes a novel low-complexity attention-generative adversarial network (LCA-GAN) for facial age estimation that combines an attention architecture and a conditional generative adversarial network (conditional GAN) to de-occlude mask-occluded human facial images. The open databases MORPH and PAL were used for the experiments. The mean absolute error (MAE) of age estimation with the de-occluded facial images reconstructed using the proposed LCA-GAN is 6.64 and 6.12 years, respectively. Thus, the proposed method yields higher age estimation accuracy than using occluded images or images reconstructed with the state-of-the-art method.

12.
J Digit Imaging ; 2023 Apr 17.
Article in English | MEDLINE | ID: covidwho-2299980

ABSTRACT

We present a novel algorithm that can generate deep synthetic COVID-19 pneumonia CT scan slices using a very small sample of positive training images in tandem with a larger number of normal images. This generative algorithm produces images of sufficient quality to enable a DNN classifier to achieve high classification accuracy using as few as 10 positive training slices (from 10 positive cases), which, to the best of our knowledge, is one order of magnitude fewer than the next closest published work at the time of writing. Deep learning with extremely small positive training volumes is a very difficult problem and has been an important topic during the COVID-19 pandemic, because for quite some time it was difficult to obtain large volumes of COVID-19-positive images for training. Algorithms that can learn to screen for diseases from few examples are an important area of research. Furthermore, algorithms that produce deep synthetic images from smaller data volumes have the added benefit of lowering the barriers to data sharing between healthcare institutions. We present the cycle-consistent segmentation-generative adversarial network (CCS-GAN), which combines style transfer with pulmonary segmentation and relevant transfer learning from negative images to create a larger volume of synthetic positive images and thereby improve diagnostic classification performance. A VGG-19 classifier plus CCS-GAN was trained using small samples of positive image slices ranging from at most 50 down to as few as 10 COVID-19-positive CT scan images. CCS-GAN achieves high accuracy with few positive images and thereby greatly reduces the burden of acquiring large training volumes to train a diagnostic classifier for COVID-19.

13.
Sensors (Basel) ; 23(8)2023 Apr 21.
Article in English | MEDLINE | ID: covidwho-2297849

ABSTRACT

Behavioral prediction modeling applies statistical techniques to classify, recognize, and predict behavior from various data. However, behavioral prediction suffers from performance deterioration and data bias problems. This study proposes behavioral prediction using text-to-numeric generative adversarial network (TN-GAN)-based multidimensional time-series augmentation to minimize the data bias problem. The prediction model dataset used nine-axis sensor data (accelerometer, gyroscope, and geomagnetic sensors). The ODROID N2+, a wearable pet device, collected the data and stored them on a web server. Outliers were removed using the interquartile range, and the processed data were arranged into sequences as input to the predictive model. After z-score normalization of the sensor values, cubic spline interpolation was performed to fill in missing values. The experimental group comprised 10 dogs, and nine behaviors were identified. The behavioral prediction model used a hybrid convolutional neural network to extract features and applied long short-term memory techniques to capture time-series characteristics. The actual and predicted values were evaluated using performance evaluation indices. The results of this study can assist in recognizing and predicting behavior and detecting abnormal behavior, capacities that can be applied to various pet monitoring systems.
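A minimal sketch of the preprocessing chain listed above (interquartile-range outlier removal, z-score normalization, and cubic-spline interpolation of the gaps) is shown below; the single synthetic signal stands in for the nine-axis sensor streams and is not the study's data or code.

```python
# Sketch of the sensor preprocessing steps named in the abstract.
import numpy as np
import pandas as pd
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
t = np.arange(200)
signal = pd.Series(np.sin(t / 10) + 0.1 * rng.normal(size=t.size), index=t)
signal.iloc[[20, 75, 130]] += 8.0                       # inject artificial outliers

# 1) interquartile-range outlier removal
q1, q3 = signal.quantile([0.25, 0.75])
iqr = q3 - q1
outliers = (signal < q1 - 1.5 * iqr) | (signal > q3 + 1.5 * iqr)
signal[outliers] = np.nan

# 2) z-score normalization of the remaining samples
signal = (signal - signal.mean()) / signal.std()

# 3) cubic-spline interpolation over the missing values
known = signal.dropna()
spline = CubicSpline(known.index.to_numpy(), known.to_numpy())
filled = pd.Series(spline(t), index=t)
print(filled.head())
```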

14.
Materials (Basel) ; 16(8)2023 Apr 21.
Article in English | MEDLINE | ID: covidwho-2295398

ABSTRACT

Mg-Zn co-doped GaN powders were obtained via the nitridation of a Ga-Mg-Zn metallic solution at 1000 °C for 2 h under ammonia flow. XRD patterns of the Mg-Zn co-doped GaN powders showed an average crystal size of 46.88 nm. Scanning electron microscopy micrographs showed an irregular shape with a ribbon-like structure and a length of 8.63 µm. Energy-dispersive spectroscopy showed the incorporation of Zn (Lα 1.012 eV) and Mg (Kα 1.253 eV), while XPS measurements showed the elemental contributions of magnesium and zinc as co-dopant elements, quantified at 49.31 eV and 1019.49 eV, respectively. The photoluminescence spectrum showed a fundamental emission located at 3.40 eV (364.70 nm), related to the band-to-band transition, and a second emission in the range from 2.80 eV to 2.90 eV (442.85-427.58 nm), which is characteristic of Mg-doped GaN and Zn-doped GaN powders. Furthermore, Raman scattering showed a shoulder at 648.05 cm-1, which could indicate the incorporation of the Mg and Zn co-dopant atoms into the GaN structure. One of the main expected applications of Mg-Zn co-doped GaN powders is in obtaining thin films for SARS-CoV-2 biosensors.

15.
1st IEEE International Conference on Automation, Computing and Renewable Systems, ICACRS 2022 ; : 820-826, 2022.
Article in English | Scopus | ID: covidwho-2257248

ABSTRACT

During the COVID-19 outbreak, all physical classes were suspended and teaching switched to online learning. This new era of learning presented several challenges for teachers and students. Students could not participate in classroom activities as successfully as in a physical class, due to a lack of educational creativity, a lack of digital tools, and dependency on the internet. Strengthening self-directed learning and improving the technical infrastructure are required to advance innovation-centric education from "teaching" to "learning" and to develop digital literacy. By incorporating technology into classroom instruction, everyone can understand the concepts and realize their right to education. Among recent advances in deep learning are Generative Adversarial Networks (GANs), which can be used as an Assistive Technology (AT) to generate a sequence of images from descriptive input text. The goal of this review is visual storytelling using text-to-image GANs, which strengthens self-directed learning through visualization and improves critical thinking and logical reasoning. © 2022 IEEE

16.
5th International Conference on Information Technology for Education and Development, ITED 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2256372

ABSTRACT

Several alarming health challenges are urging medical experts and practitioners to research and develop new approaches to diagnose, detect, and control the early spread of deadly diseases. One of the most challenging is coronavirus infection (COVID-19). Models have been proposed to detect and diagnose early infection so that proper precautions can be taken against the COVID-19 virus. Some researchers adopt parameter optimization to attain better accuracy on chest X-ray images of COVID-19 and other related diseases. Hence, this research work adopts a hybridized cascaded feature extraction technique, combining Local Binary Patterns (LBP) and Histograms of Oriented Gradients (HOG), with a Convolutional Neural Network (CNN) for the deep learning classification model. Merging LBP and HOG feature extraction significantly improved the performance of the deep-learning CNN classifier. As a result, the proposed model attains 95% accuracy, 92% precision, and 93% recall. © 2022 IEEE.
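The hand-crafted descriptors named above (LBP and HOG) are available in scikit-image; a minimal sketch of extracting and concatenating them is given below, where the random array stands in for a pre-processed chest X-ray and all parameter values are illustrative assumptions rather than the paper's settings.

```python
# Sketch: Local Binary Pattern and Histogram of Oriented Gradients features
# computed with scikit-image and concatenated before classification.
import numpy as np
from skimage.feature import local_binary_pattern, hog

rng = np.random.default_rng(0)
xray = rng.random((128, 128))                       # stand-in grayscale image

lbp = local_binary_pattern(xray, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

hog_vec = hog(xray, orientations=9, pixels_per_cell=(16, 16),
              cells_per_block=(2, 2), feature_vector=True)

features = np.concatenate([lbp_hist, hog_vec])      # fed to the downstream classifier
print(features.shape)
```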

17.
IEEE Transactions on Information Forensics and Security ; : 1-1, 2023.
Article in English | Scopus | ID: covidwho-2251786

ABSTRACT

Currently, it is ever more common to access online services for activities that formerly required physical attendance. From banking operations to visa applications, a significant number of processes have been digitised, especially since the advent of the COVID-19 pandemic, requiring remote biometric authentication of the user. On the downside, some subjects intend to interfere with the normal operation of remote systems for personal profit by using fake identity documents, such as passports and ID cards. Deep learning solutions for detecting such fraud have been presented in the literature. However, due to privacy concerns and the sensitive nature of personal identity documents, developing a dataset with the necessary number of examples for training deep neural networks is challenging. This work explores three methods for synthetically generating ID card images to increase the amount of data available for training fraud-detection networks. These methods include computer vision algorithms and Generative Adversarial Networks. Our results indicate that databases can be supplemented with synthetic images without any loss in performance for the print/scan Presentation Attack Instrument Species (PAIS) and with a loss in performance of 1% for the screen-capture PAIS.

18.
Clinical Complementary Medicine and Pharmacology ; 1(1) (no pagination), 2021.
Article in English | EMBASE | ID: covidwho-2287214

ABSTRACT

Background: The outbreak of COVID-19 has brought unprecedented perils to human health and raised public health concerns in more than two hundred countries. A safe and effective treatment scheme is urgently needed. Objective(s): To evaluate the effects of an integrated TCM and western medicine treatment scheme on COVID-19. Method(s): A single-arm clinical trial was carried out in Hangzhou Xixi Hospital, a hospital affiliated with Zhejiang Chinese Medical University. 102 confirmed cases were screened out of 725 suspected cases, and 93 of them were treated with the integrated TCM and western medicine treatment scheme. Result(s): 83 cases were cured, 5 cases deteriorated, and 5 cases withdrew from the study. No deaths were reported. The mean relief times of fever, cough, diarrhea, and fatigue were (4.78 +/- 4.61) days, (7.22 +/- 4.99) days, (5.28 +/- 3.39) days, and (5.28 +/- 3.39) days, respectively. It took (14.84 +/- 5.50) days for SARS-CoV-2 to turn negative by nucleic acid amplification-based testing. Multivariable Cox regression analysis revealed that age, BMI, PISCT, BPC, AST, CK, BS, and UPRO were independent risk factors for COVID-19 treatment. Conclusion(s): Our study suggests that the integrated TCM and western medicine treatment scheme was effective for COVID-19. Copyright © 2021

19.
8th International Conference on Modelling and Development of Intelligent Systems, MDIS 2022 ; 1761 CCIS:173-187, 2023.
Article in English | Scopus | ID: covidwho-2281513

ABSTRACT

Creative industries were thought to be the most difficult avenue for Computer Science to enter and perform well in. Fashion is an integral part of day-to-day life, necessary both for displaying style, expressing feelings, and conveying artistic emotion, and for the purely functional purpose of keeping our bodies warm and protected from external factors. The COVID-19 pandemic has accelerated several trends that had been forming in the clothing and textile industry. With the large-scale adoption of Artificial Intelligence (AI) and deep learning technologies, the fashion industry is at a turning point. AI is now in charge of supervising the supply chain, manufacturing, delivery, marketing, and targeted advertising for clothing and wearables, and could soon replace designers too. Clothing design for purely digital environments, such as the Metaverse, games, and other online-specific activities, is a niche with huge potential for market growth. This article explains how Big Data and Machine Learning are used to solve important issues in the fashion industry in the post-COVID context and explores the future of clothing and apparel design via artificial generative design. We aim to explore the new opportunities that generative models offer for the development of the fashion industry and textile patterns. The article focuses especially on Generative Adversarial Networks (GAN) but also briefly analyzes other generative models, their advantages and shortcomings. In this regard, we undertook several experiments that highlighted some disadvantages of GANs. Finally, we suggest future research niches and possible hindrances that an end user might face when trying to generate their own fashion models using generative deep learning technologies. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

20.
International Journal of Computing and Digital Systems ; 12(1):1161-1171, 2022.
Article in English | Scopus | ID: covidwho-2280600

ABSTRACT

Deep learning techniques, particularly convolutional neural networks (CNNs), have led to an enormous breakthrough in the field of medical imaging. Since the onset of the COVID-19 pandemic, studies based on deep learning systems have shown excellent results for diagnosis from chest X-rays. However, these methods are data sensitive, and their effectiveness depends on the availability and reliability of data. Models trained on a class-imbalanced dataset tend to be biased towards the majority class. Class-imbalanced datasets can be balanced by augmenting them with synthetically generated images. This paper proposes a method for generating synthetic COVID-19 chest X-ray images using Generative Adversarial Networks (GANs). The images generated using the proposed GAN were added to three imbalanced datasets of real images. The performance of the CNN model for COVID-19 classification improved with the augmented images. A significant improvement was seen in the sensitivity, or recall, which is a very critical metric. The sensitivity achieved by adding GAN-generated synthetic images to each of the imbalanced datasets matched the sensitivity levels of the balanced dataset. Hence, the proposed solution can be used to generate images that boost the sensitivity of COVID-19 diagnosis to the level of a balanced dataset. Furthermore, this approach of synthetic data augmentation can be used in other medical classification applications for improved diagnosis recommendations. © 2022 University of Bahrain. All rights reserved.
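The core idea above (adding GAN-generated samples to the minority class and checking the effect on sensitivity/recall) can be sketched with any classifier; in the illustration below, logistic regression and random feature vectors stand in for the CNN and the GAN-generated images, so only the bookkeeping is meaningful, not the numbers.

```python
# Sketch: compare minority-class recall (sensitivity) before and after the
# training set is topped up with synthetic positive samples.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
X_neg = rng.normal(0.0, 1.0, size=(900, 20))
X_pos = rng.normal(0.8, 1.0, size=(60, 20))            # scarce positive class
X_test = np.vstack([rng.normal(0.0, 1.0, (200, 20)),
                    rng.normal(0.8, 1.0, (200, 20))])
y_test = np.array([0] * 200 + [1] * 200)

def fit_and_recall(X_extra=None):
    X = np.vstack([X_neg, X_pos] + ([X_extra] if X_extra is not None else []))
    y = np.array([0] * len(X_neg) + [1] * (len(X) - len(X_neg)))
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return recall_score(y_test, clf.predict(X_test))

synthetic_pos = rng.normal(0.8, 1.0, size=(840, 20))    # stand-in GAN samples
print("recall, imbalanced:", fit_and_recall())
print("recall, augmented :", fit_and_recall(synthetic_pos))
```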
